
    Soft Sequence Heaps

    Chazelle [JACM00] introduced the soft heap as a building block for efficient minimum spanning tree algorithms, and recently Kaplan et al. [SOSA2019] showed how soft heaps can be applied to achieve simpler algorithms for various selection problems. A soft heap trades off accuracy for efficiency by allowing εN of the items in a heap to be corrupted after a total of N insertions, where a corrupted item is an item with an artificially increased key and 0 < ε ≤ 1/2 is a fixed error parameter. Chazelle's soft heaps are based on binomial trees and support insertions in amortized O(lg(1/ε)) time and extract-min operations in amortized O(1) time. In this paper we explore the design space of soft heaps. The main contribution of this paper is an alternative soft heap implementation based on merging sorted sequences, with time bounds matching those of Chazelle's soft heaps. We also discuss a variation of the soft heap by Kaplan et al. [SICOMP2013], where we avoid performing insertions lazily. It is based on ternary trees instead of binary trees and matches the time bounds of Kaplan et al., i.e. amortized O(1) insertions and amortized O(lg(1/ε)) extract-min. Both our data structures only introduce corruptions after extract-min operations, which return the set of items corrupted by the operation.
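    A minimal sketch of the merge-and-prune mechanism behind sequence-based soft heaps, assuming a simplified rule: sequences of equal rank are merged binomial-heap style, and merges above a fixed rank threshold prune every second element into the corruption set of its successor, which implicitly raises the pruned element's key. The class name, the threshold choice, and the bookkeeping are illustrative; the paper's actual structure, its εN corruption accounting, and its amortized bounds are not reproduced here.

```python
import math

class SoftSequenceHeapSketch:
    """Toy soft heap over sorted sequences (illustrative, not the paper's
    data structure).  Each element is [key, item, corrupted_items]; pruned
    items ride along with a successor whose key is at least theirs, i.e.
    their keys are artificially increased."""

    def __init__(self, eps=0.1):
        self.sequences = {}                       # rank -> sorted sequence
        self.threshold = max(1, math.ceil(math.log2(1 / eps)))

    def insert(self, key, item):
        seq, rank = [[key, item, []]], 0
        while rank in self.sequences:             # binomial-style carry
            seq = self._merge(self.sequences.pop(rank), seq)
            rank += 1
            if rank > self.threshold:
                seq = self._prune(seq)
        self.sequences[rank] = seq

    def _merge(self, a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i][0] <= b[j][0]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    def _prune(self, seq):
        """Move every second element into the corruption set of its successor."""
        out = []
        for i in range(1, len(seq), 2):
            prev, cur = seq[i - 1], seq[i]
            cur[2].extend([prev[1]] + prev[2])    # prev's key becomes cur's key
            out.append(cur)
        if len(seq) % 2:                          # odd leftover stays uncorrupted
            out.append(seq[-1])
        return out

    def extract_min(self):
        """Return (key, item, items corrupted that ride along with this item)."""
        rank = min(self.sequences, key=lambda r: self.sequences[r][0][0])
        key, item, corrupted = self.sequences[rank].pop(0)
        if not self.sequences[rank]:
            del self.sequences[rank]
        return key, item, corrupted
```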

    Partially Persistent Data Structures of Bounded Degree with Constant Update Time

    The problem of making bounded in-degree and out-degree data structures partially persistent is considered. The node-copying method of Driscoll et al. is extended so that updates can be performed in worst-case constant time on the pointer machine model. Previously this was only known to be possible in amortized constant time [Driscoll89]. The result is presented in terms of a new strategy for Dietz and Raman's dynamic two-player pebble game on graphs. It is shown how to implement the strategy, and the upper bound on the required number of pebbles is improved from 2b + 2d + O(sqrt(b)) to d + 2b, where b is the bound on the in-degree and d the bound on the out-degree. We also give a lower bound that shows that the number of pebbles depends on the out-degree d.
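    A small sketch of the amortized node-copying idea on a partially persistent singly linked list (in-degree and out-degree 1): each node carries a couple of extra modification slots, and when they overflow the node is copied and its unique in-neighbour is redirected, possibly cascading. Class names, the slot count, and the single update operation are illustrative assumptions; the worst-case constant-time variant driven by the pebble-game strategy of the paper is not shown.

```python
class Node:
    SLOTS = 2                        # extra pointer-modification slots (toy choice)

    def __init__(self, data, nxt=None):
        self.data = data
        self.next0 = nxt             # value of the 'next' field at creation time
        self.mods = []               # [(version, new_next)], at most SLOTS entries
        self.parent = None           # unique in-neighbour in the live version

class PersistentList:
    """Partially persistent singly linked list via node copying: every old
    version stays readable, updates are applied to the newest version only."""

    def __init__(self, values):
        nodes = [Node(v) for v in values]
        for a, b in zip(nodes, nodes[1:]):
            a.next0, b.parent = b, a
        self.version = 0
        self.live_head = nodes[0] if nodes else None
        self.heads = {0: self.live_head}

    def _next_now(self, node):
        return node.mods[-1][1] if node.mods else node.next0

    def _next_at(self, node, version):
        val = node.next0
        for v, p in node.mods:       # mods are stored in increasing version order
            if v <= version:
                val = p
        return val

    def _set_next(self, node, new_next):
        """Record node.next := new_next in the current version; copy on overflow."""
        if new_next is not None:
            new_next.parent = node
        if len(node.mods) < Node.SLOTS:
            node.mods.append((self.version, new_next))
            return
        fresh = Node(node.data, new_next)           # copy with up-to-date fields
        if new_next is not None:
            new_next.parent = fresh
        if node.parent is None:                     # node was the live head
            self.live_head = fresh
        else:                                       # cascade: redirect in-neighbour
            self._set_next(node.parent, fresh)

    def delete_after(self, node):
        """Example update: splice out node's successor, creating a new version."""
        self.version += 1
        succ = self._next_now(node)
        self._set_next(node, self._next_now(succ) if succ else None)
        self.heads[self.version] = self.live_head
        return self.version

    def to_list(self, version):
        out, n = [], self.heads[version]
        while n is not None:
            out.append(n.data)
            n = self._next_at(n, version)
        return out

# Example: pl = PersistentList([1, 2, 3]); v1 = pl.delete_after(pl.live_head)
#          pl.to_list(0) == [1, 2, 3] and pl.to_list(v1) == [1, 3]
```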

    Cache-Oblivious Implicit Predecessor Dictionaries with the Working-Set Property

    In this paper we present an implicit dynamic dictionary with the working-set property, supporting insert(e) and delete(e) in O(log n) time, predecessor(e) in O(log l_{p(e)}) time, successor(e) in O(log l_{s(e)}) time and search(e) in O(log min(l_{p(e)}, l_{e}, l_{s(e)})) time, where n is the number of elements stored in the dictionary, l_{e} is the number of distinct elements searched for since element e was last searched for, and p(e) and s(e) are the predecessor and successor of e, respectively. The time bounds are all worst-case. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the log is base B, and the cache-obliviousness is due to our black-box use of an existing cache-oblivious implicit dictionary. This is the first implicit dictionary supporting predecessor and successor searches in the working-set bound; previous implicit structures required O(log n) time.
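    A brief sketch of the working-set idea in its simplest (non-implicit, non-cache-oblivious) form, assuming a hypothetical structure of sorted blocks of doubly-exponentially growing capacity: a searched element is promoted to the smallest block, so a repeat search soon afterwards only touches small blocks. The block maintenance below is deliberately naive and only conveys the invariant; achieving such bounds implicitly, in a single array with no extra space and cache-obliviously, is the paper's contribution.

```python
from bisect import bisect_left, insort

class WorkingSetDict:
    """Blocks of doubly-exponentially growing capacity (2, 4, 16, 256, ...).
    Recently searched elements live in early, small blocks.  Assumes distinct
    elements; delete is omitted for brevity."""

    def __init__(self):
        self.blocks = []        # blocks[i]: sorted list of capacity 2**(2**i)
        self.recency = {}       # element -> timestamp of last access
        self.clock = 0

    def _cap(self, i):
        return 2 ** (2 ** i)

    def insert(self, e):
        self.clock += 1
        self.recency[e] = self.clock
        self._put(0, e)

    def _put(self, i, e):
        if i == len(self.blocks):
            self.blocks.append([])
        insort(self.blocks[i], e)
        if len(self.blocks[i]) > self._cap(i):                  # overflow:
            old = min(self.blocks[i], key=self.recency.get)     # demote the
            self.blocks[i].remove(old)                          # least recent
            self._put(i + 1, old)

    def search(self, e):
        self.clock += 1
        for blk in self.blocks:
            j = bisect_left(blk, e)
            if j < len(blk) and blk[j] == e:
                blk.pop(j)                      # promote e to the front block
                self.recency[e] = self.clock
                self._put(0, e)
                return True
        return False
```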

    Deterministic Cache-Oblivious Funnelselect

    In the multiple-selection problem one is given an unsorted array S of N elements and an array of q query ranks r_1 < ... < r_q, and the task is to return, in sorted order, the q elements in S of rank r_1, ..., r_q, respectively. The asymptotic deterministic comparison complexity of the problem was settled by Dobkin and Munro [JACM 1981]. In the I/O model an optimal I/O complexity was achieved by Hu et al. [SPAA 2014]. Recently [ESA 2023], we presented a cache-oblivious algorithm with matching I/O complexity, named funnelselect, since it heavily borrows ideas from the cache-oblivious sorting algorithm funnelsort from the seminal paper by Frigo, Leiserson, Prokop and Ramachandran [FOCS 1999]. Funnelselect is inherently randomized, as it relies on sampling for cheaply finding many good pivots. In this paper we present deterministic funnelselect, achieving the same optimal I/O complexity cache-obliviously without randomization. Our new algorithm essentially replaces a single (in expectation) reversed-funnel computation using random pivots by a recursive algorithm using multiple reversed-funnel computations. To meet the I/O bound, this requires a carefully chosen subproblem size based on the entropy of the sequence of query ranks; deterministic funnelselect thus raises distinct technical challenges not met by randomized funnelselect. The resulting worst-case I/O bound is O(sum_{i=1}^{q+1} (Δ_i/B) log_{M/B}(N/Δ_i) + N/B), where B is the external-memory block size, M ≥ B^{1+ε} is the internal memory size for some constant ε > 0, and Δ_i = r_i - r_{i-1} (assuming r_0 = 0 and r_{q+1} = N + 1).
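    For intuition about why the bound is phrased in terms of the gaps Δ_i, here is a small in-memory sketch of the classic multiple-selection recursion (in the spirit of the comparison-model algorithms, not the cache-oblivious funnelselect): select the query rank nearest the middle of the current subarray, let the selection pass partition the subarray, and recurse on the two sides with their share of the ranks. The function names are illustrative and the pivot choice is randomized rather than deterministic.

```python
import random

def multiselect(a, ranks):
    """Return {r: element of rank r} for 1-based ranks r (as in sorted(a))."""
    a = list(a)
    out = {}

    def qselect(lo, hi, k):
        # Rearrange a[lo:hi] so that a[k] holds the element of global 0-based
        # rank k, with smaller elements before it and larger elements after it.
        while True:
            p = a[random.randrange(lo, hi)]
            lt = [x for x in a[lo:hi] if x < p]
            eq = [x for x in a[lo:hi] if x == p]
            gt = [x for x in a[lo:hi] if x > p]
            a[lo:hi] = lt + eq + gt
            if k < lo + len(lt):
                hi = lo + len(lt)
            elif k >= lo + len(lt) + len(eq):
                lo = lo + len(lt) + len(eq)
            else:
                return a[k]

    def rec(lo, hi, qs):
        if not qs:
            return
        # pick the query rank nearest the midpoint of the current subarray
        mid = min(qs, key=lambda r: abs((r - 1) - (lo + hi) // 2))
        out[mid] = qselect(lo, hi, mid - 1)
        rec(lo, mid - 1, [q for q in qs if q < mid])
        rec(mid, hi, [q for q in qs if q > mid])

    rec(0, len(a), sorted(ranks))
    return out

# Example: multiselect([5, 3, 1, 4, 2], [2, 4]) == {2: 2, 4: 4}
```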

    Cache-Oblivious Data Structures and Algorithms for Undirected Breadth-First Search and Shortest Paths

    We present improved cache-oblivious data structures and algorithms for breadth-first search (BFS) on undirected graphs and the single-source shortest path (SSSP) problem on undirected graphs with non-negative edge weights. For the SSSP problem, our result closes the performance gap between the currently best cache-aware algorithm and the cache-oblivious counterpart. Our cache-oblivious SSSP-algorithm takes nearly full advantage of block transfers for dense graphs. The algorithm relies on a new data structure, called bucket heap, which is the first cache-oblivious priority queue to efficiently support a weak DecreaseKey operation. For the BFS problem, we reduce the number of I/Os for sparse graphs by a factor of nearly sqrt(B), where B is the cache-block size, nearly closing the performance gap between the currently best cache-aware and cache-oblivious algorithms
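    Not the bucket heap itself, but a compact in-memory illustration of the weak-DecreaseKey pattern such priority queues support, assuming the standard lazy-reinsertion idiom: a decrease is just another insert, and stale entries are discarded when popped, so the queue never needs to locate the old entry. The graph format and function name are illustrative.

```python
import heapq

def sssp(graph, source):
    """Dijkstra with a 'weak' decrease-key: instead of updating an existing
    queue entry, push a fresh (distance, vertex) pair and skip entries that
    are already settled when they come out of the queue.
    graph: {u: [(v, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    settled = set()
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in settled:
            continue                      # stale entry left over from a 'decrease'
        settled.add(u)
        for v, w in graph.get(u, ()):
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))   # decrease-key by reinsertion
    return dist

# Example: sssp({'a': [('b', 2), ('c', 5)], 'b': [('c', 1)]}, 'a')
#          == {'a': 0, 'b': 2, 'c': 3}
```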

    I/O-efficient dynamic point location in monotone planar subdivisions

    We present an efficient external-memory dynamic data structure for point location in monotone planar subdivisions. Our data structure uses O(N/B) disk blocks to store a monotone subdivision of size N, where B is the size of a disk block. It supports queries in O(log_B^2 N) I/Os (worst-case) and updates in O(log_B^2 N) I/Os (amortized). We also propose a new variant of B-trees, called level-balanced B-trees, which allow insert, delete, merge, and split operations in O((1 + (b/B) log_{M/B}(N/B)) log_b N) I/Os (amortized), for 2 ≤ b ≤ B/2, even if each node stores a pointer to its parent. Here M is the size of main memory. Besides being essential to our point-location data structure, we believe that level-balanced B-trees are of significant independent interest. They can, for example, be used to dynamically maintain a planar st-graph using O((1 + (b/B) log_{M/B}(N/B)) log_b N) = O(log_B^2 N) I/Os (amortized) per update, so that reachability queries can be answered in O(log_B N) I/Os (worst case)
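    As a point of reference for the query bound, here is a sketch of the classical static chain-method query for a monotone subdivision (in the spirit of Lee and Preparata), assuming the separating x-monotone chains are given bottom-to-top, each has at least two vertices, and each spans the query's x-range: one binary search over the chains, and one binary search within each tested chain, giving the familiar O(log^2 N)-style query. The dynamic, I/O-efficient structure is the paper's contribution and is not reproduced here.

```python
def locate(chains, qx, qy):
    """Return the number of separating chains passing strictly below (qx, qy),
    i.e. the index of the horizontal slab of regions containing the point.
    chains: x-monotone chains ordered bottom to top; each chain is a list of
    vertices [(x0, y0), (x1, y1), ...] with strictly increasing x."""

    def below(chain):
        # Binary search for the chain edge spanning qx, then compare heights.
        lo, hi = 0, len(chain) - 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if chain[mid][0] <= qx:
                lo = mid
            else:
                hi = mid
        (x1, y1), (x2, y2) = chain[lo], chain[hi]
        y = y1 + (y2 - y1) * (qx - x1) / (x2 - x1)   # chain height at qx
        return y < qy

    lo, hi = 0, len(chains)        # find the first chain that is not below
    while lo < hi:
        mid = (lo + hi) // 2
        if below(chains[mid]):
            lo = mid + 1
        else:
            hi = mid
    return lo
```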